11.
In approximate fault-tree calculations, a simple method of improving accuracy by weighting is explored. Using Monte Carlo simulations written in Visual C, optimal weighting factors are obtained for approximating the top-event occurrence probability over different intervals of basic-event occurrence probability. The optimal weighting factors obtained with this method improve the accuracy of approximate fault-tree calculation, most noticeably when the basic-event probabilities are large, and meet the need for higher precision when fault trees are used to compute top-event occurrence probabilities for certain special equipment.
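The Visual C simulation itself is not reproduced here, but the underlying Monte Carlo step is easy to sketch. The tree structure, gate layout, and basic-event probabilities below are invented for illustration; a closed-form reference value is available only because the example gates happen to be independent:

```python
import random

def top_event(e):
    """Hypothetical example tree: TOP = (e0 OR e1) AND (e2 OR e3)."""
    return (e[0] or e[1]) and (e[2] or e[3])

def mc_top_probability(p, trials=100_000, seed=42):
    """Monte Carlo estimate of the top-event probability: sample each
    basic event as an independent Bernoulli trial and evaluate the tree."""
    rng = random.Random(seed)
    hits = sum(top_event([rng.random() < pi for pi in p]) for _ in range(trials))
    return hits / trials

p = [0.3, 0.2, 0.25, 0.15]                     # basic-event probabilities (illustrative)
exact = (1 - 0.7 * 0.8) * (1 - 0.75 * 0.85)    # independent gates -> closed form
estimate = mc_top_probability(p)
```

In the paper's setting an analytic approximation stands in for the exact value and the weight factors correct its bias; here the closed form merely serves as a reference against which the simulated estimate can be checked.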
14.
An adaptive fuzzy filter based on estimated entropy is introduced and applied to the digital filtering of rapidly varying rocket telemetry signals in wideband noise. The adaptive fuzzy filtering algorithm is discussed, and experimental results are given. Analysis and experiment show that this new filter automatically adjusts its parameters according to the complexity of the signal and filters non-stationary random signals in wideband noise effectively.
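The abstract does not give the fuzzy membership functions, so the following is only a rough sketch of the general idea of entropy-driven parameter adaptation: an exponential smoother whose gain is reduced when the local amplitude histogram looks high-entropy, i.e. noise-like. The window length, bin count, and gain mapping are all assumptions:

```python
import math
import random

def window_entropy(window, bins=8):
    """Shannon entropy of the window's amplitude histogram,
    used as a rough measure of local signal complexity."""
    lo, hi = min(window), max(window)
    if hi == lo:
        return 0.0
    counts = [0] * bins
    for v in window:
        counts[min(int((v - lo) / (hi - lo) * bins), bins - 1)] += 1
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def entropy_adaptive_filter(signal, win=16, bins=8):
    """Exponential smoother whose gain shrinks as estimated entropy rises,
    so noisier-looking stretches are smoothed harder."""
    h_max = math.log2(bins)
    out, y = [], signal[0]
    for i, x in enumerate(signal):
        h = window_entropy(signal[max(0, i - win + 1):i + 1], bins)
        alpha = 0.9 - 0.8 * (h / h_max)   # gain confined to [0.1, 0.9]
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return out

rng = random.Random(0)
noisy = [math.sin(0.05 * t) + rng.gauss(0, 0.5) for t in range(400)]
smoothed = entropy_adaptive_filter(noisy)
```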
15.
To express the mutual inductance between two coaxial circular coils of arbitrary sizes and arbitrary separation in terms of simple, common analytic functions, the mutual-inductance integral derived from the Biot-Savart law is first evaluated numerically. Then, by analyzing the limiting cases and conjecturing forms for the non-limiting ones, trial functions built from power series, exponentials, logarithms, or their combinations are fitted and verified against the numerical results. Approximate analytic expressions for the mutual inductance of two coaxial circular coils of arbitrary sizes and separations (except when the two coils nearly coincide) are finally given for six cases. The expressions are fairly simple, physically meaningful, and accurate to within ±5% relative error.
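The numerical starting point described above, evaluating the mutual-inductance integral that follows from the Biot-Savart law (the Neumann formula) for two coaxial loops, can be sketched as follows. The reduction of the double angular integral to a single one is standard, and the midpoint rule converges quickly because the integrand is smooth and periodic; the near-coincident case excluded by the abstract is excluded here as well:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def mutual_inductance(a, b, d, n=4000):
    """Mutual inductance of two coaxial circular loops of radii a and b,
    separated axially by d (all in metres), via the Neumann double integral
    reduced to one angular integral; midpoint rule with n sample points.
    Not valid when the loops nearly coincide (a ~ b and d ~ 0)."""
    step = 2 * math.pi / n
    total = 0.0
    for k in range(n):
        theta = (k + 0.5) * step
        r = math.sqrt(a * a + b * b + d * d - 2 * a * b * math.cos(theta))
        total += math.cos(theta) / r
    return 0.5 * MU0 * a * b * total * step
```

For widely separated loops the result can be checked against the magnetic-dipole limit M ≈ μ₀πa²b²/(2d³), and the formula is symmetric in the two radii, as it must be.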
16.
In this study, we illustrate a real-time approximate dynamic programming (RTADP) method for solving multistage capacity decision problems in a stochastic manufacturing environment, using an exemplary three-stage manufacturing system with recycle. The system is a moderate-size queuing network which experiences stochastic variations in demand and product yield. The dynamic capacity decision problem is formulated as a Markov decision process (MDP). The proposed RTADP method starts with a set of heuristics and learns a superior quality solution by interacting with the stochastic system via simulation. The curse of dimensionality associated with DP methods is alleviated by the adoption of several notions, including an "evolving set of relevant states," for which the value function table is built and updated, an "adaptive action set" for keeping track of attractive action candidates, and a "nonparametric k nearest neighbor averager" for value function approximation. The performance of the learned solution is evaluated against (1) an "ideal" solution derived using a mixed integer programming (MIP) formulation, which assumes full knowledge of future realized values of the stochastic variables, (2) a myopic heuristic solution, and (3) a sample-path-based rolling-horizon MIP solution. The policy learned through the RTADP method turned out to be superior to policies (2) and (3). © 2010 Wiley Periodicals, Inc. Naval Research Logistics 2010
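The "nonparametric k nearest neighbor averager" used for value function approximation can be illustrated with a minimal sketch; the value table and query state below are made up for the example:

```python
import math

def knn_value(table, state, k=3):
    """Nonparametric k-nearest-neighbour averager: approximate the value of a
    state not in the table by averaging the values of its k closest stored
    states (Euclidean distance). `table` maps state tuples to value estimates."""
    nearest = sorted(table.items(), key=lambda item: math.dist(item[0], state))[:k]
    return sum(v for _, v in nearest) / len(nearest)

# Tiny illustrative value table (states and values are invented)
V = {(0.0, 0.0): 0.0, (1.0, 0.0): 1.0, (0.0, 1.0): 1.0, (5.0, 5.0): 100.0}
approx = knn_value(V, (0.1, 0.1))   # averages the three nearby states
```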
Consider a patrol problem, where a patroller traverses a graph through edges to detect potential attacks at nodes. An attack takes a random amount of time to complete. The patroller takes one time unit to move to and inspect an adjacent node, and will detect an ongoing attack with some probability. If an attack completes before it is detected, a cost is incurred. The attack time distribution, the cost due to a successful attack, and the detection probability all depend on the attack node. The patroller seeks a patrol policy that minimizes the expected cost incurred when, and if, an attack eventually happens. We consider two cases. A random attacker chooses where to attack according to predetermined probabilities, while a strategic attacker chooses where to attack to incur the maximal expected cost. In each case, computing the optimal solution, although possible, quickly becomes intractable for problems of practical sizes. Our main contribution is to develop efficient index policies—based on Lagrangian relaxation methodology, and also on approximate dynamic programming—which typically achieve within 1% of optimality with computation time orders of magnitude less than what is required to compute the optimal policy for problems of practical sizes. © 2014 Wiley Periodicals, Inc. Naval Research Logistics, 61: 557–576, 2014
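As a hedged illustration of the model (not of the index policies themselves), the following Monte Carlo sketch estimates the expected cost of a fixed cyclic patrol route against a random attacker. All node parameters are invented, and the exponential attack-duration distribution is an assumption:

```python
import random

def expected_cost(route, attack_prob, mean_time, detect_prob, cost,
                  trials=20_000, seed=1):
    """Monte Carlo estimate of expected attack cost under a fixed cyclic
    patrol route. One time unit per move/inspection, per-node detection
    probability, exponential attack durations; all parameters illustrative."""
    rng = random.Random(seed)
    nodes = sorted(attack_prob)
    weights = [attack_prob[v] for v in nodes]
    total = 0.0
    for _ in range(trials):
        j = rng.choices(nodes, weights=weights)[0]      # random attacker
        duration = rng.expovariate(1.0 / mean_time[j])  # attack completion time
        phase = rng.randrange(len(route))               # random start phase
        t, detected = 0, False
        while t < duration:
            if route[(phase + t) % len(route)] == j and rng.random() < detect_prob[j]:
                detected = True
                break
            t += 1
        if not detected:
            total += cost[j]
    return total / trials

# Two-node example: all attacks occur at node 0
params = dict(attack_prob={0: 1.0, 1: 0.0}, mean_time={0: 5.0, 1: 5.0},
              detect_prob={0: 0.5, 1: 0.5}, cost={0: 10.0, 1: 10.0})
good = expected_cost([0, 1], **params)   # route visits the attacked node
bad = expected_cost([1, 1], **params)    # route never visits it
```

A route that never inspects the attacked node incurs the full attack cost in every trial, which gives a quick sanity check on the simulator.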
19.
For optimization problems in which fitness evaluation is time-consuming, a genetic algorithm with a fitness-prediction mechanism is proposed. To control the accuracy and frequency of fitness prediction, a prediction model based on the concept of credibility is built: a credibility-decay mechanism is introduced to limit the propagation and accumulation of prediction error, and a redundant-individual elimination mechanism is introduced to reduce computational cost. The algorithm's convergence and effectiveness were tested on three benchmark functions; it found satisfactory optima on all three while cutting the number of true fitness evaluations by more than 60%.
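A minimal sketch of the prediction mechanism described above: offspring inherit a predicted fitness (here simply the parents' mean) together with a credibility value that decays on each inheritance, and a true evaluation is triggered only once credibility falls below a threshold. The decay rate, threshold, and benchmark (a sphere function) are illustrative choices, not the paper's:

```python
import random

def sphere(x):
    """Benchmark objective: minimum 0 at the origin."""
    return sum(v * v for v in x)

def ga_with_prediction(dim=5, pop=30, gens=40, decay=0.7, cred_min=0.35, seed=0):
    """Toy GA sketch: individuals carry (genes, fitness, credibility).
    Offspring get a predicted fitness with decayed credibility and are
    truly evaluated only when credibility drops below `cred_min`."""
    rng = random.Random(seed)
    evals = 0
    def evaluate(g):
        nonlocal evals
        evals += 1
        return sphere(g)
    popn = []
    for _ in range(pop):
        g = [rng.uniform(-5, 5) for _ in range(dim)]
        popn.append((g, evaluate(g), 1.0))
    for _ in range(gens):
        kids = []
        for _ in range(pop):
            (g1, f1, c1), (g2, f2, c2) = rng.sample(popn, 2)
            g = [(a + b) / 2 + rng.gauss(0, 0.1) for a, b in zip(g1, g2)]
            c = min(c1, c2) * decay                  # credibility decays on inheritance
            if c < cred_min:
                kids.append((g, evaluate(g), 1.0))   # true evaluation, credibility reset
            else:
                kids.append((g, (f1 + f2) / 2, c))   # predicted fitness
        popn = sorted(popn + kids, key=lambda ind: ind[1])[:pop]
    return popn[0][1], evals, pop * (gens + 1)

best, true_evals, full_evals = ga_with_prediction()
```

Counting true evaluations against the number a fully evaluated run would need shows the saving the mechanism is after, though this toy version makes no claim to match the paper's reported figures.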